Exploration server on the relations between France and Australia

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Robust vision‐based underwater homing using self‐similar landmarks

Internal identifier: 008749 (Main/Exploration); previous: 008748; next: 008750


Authors: Amaury Negre [France]; Cedric Pradalier [Australia]; Matthew Dunbabin [Australia]

Source:

RBID : ISTEX:4B3736AF066B055F4CCDE69002FF8EC8EFAE3978

French descriptors

English descriptors

Abstract

Next‐generation autonomous underwater vehicles (AUVs) will be required to robustly identify underwater targets for tasks such as inspection, localization, and docking. Given their often unstructured operating environments, vision offers enormous potential in underwater navigation over more traditional methods; however, reliable target segmentation often plagues these systems. This paper addresses robust vision‐based target recognition by presenting a novel scale and rotationally invariant target design and recognition routine based on self‐similar landmarks that enables robust target pose estimation with respect to a single camera. These algorithms are applied to an AUV with controllers developed for vision‐based docking with the target. Experimental results show that the system performs exceptionally on limited processing power and demonstrates how the combined vision and controller system enables robust target identification and docking in a variety of operating conditions. © 2008 Wiley Periodicals, Inc.

URL:
DOI: 10.1002/rob.20246


Affiliations:


Links to previous steps (curation, corpus, ...)


The document in XML format

<record>
<TEI wicri:istexFullTextTei="biblStruct">
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Robust vision‐based underwater homing using self‐similar landmarks</title>
<author>
<name sortKey="Negre, Amaury" sort="Negre, Amaury" uniqKey="Negre A" first="Amaury" last="Negre">Amaury Negre</name>
</author>
<author>
<name sortKey="Pradalier, Cedric" sort="Pradalier, Cedric" uniqKey="Pradalier C" first="Cedric" last="Pradalier">Cedric Pradalier</name>
</author>
<author>
<name sortKey="Dunbabin, Matthew" sort="Dunbabin, Matthew" uniqKey="Dunbabin M" first="Matthew" last="Dunbabin">Matthew Dunbabin</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:4B3736AF066B055F4CCDE69002FF8EC8EFAE3978</idno>
<date when="2008" year="2008">2008</date>
<idno type="doi">10.1002/rob.20246</idno>
<idno type="url">https://api.istex.fr/document/4B3736AF066B055F4CCDE69002FF8EC8EFAE3978/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">000E18</idno>
<idno type="wicri:explorRef" wicri:stream="Istex" wicri:step="Corpus" wicri:corpus="ISTEX">000E18</idno>
<idno type="wicri:Area/Istex/Curation">000E18</idno>
<idno type="wicri:Area/Istex/Checkpoint">001116</idno>
<idno type="wicri:explorRef" wicri:stream="Istex" wicri:step="Checkpoint">001116</idno>
<idno type="wicri:doubleKey">1556-4959:2008:Negre A:robust:vision:based</idno>
<idno type="wicri:Area/Main/Merge">009010</idno>
<idno type="wicri:Area/Main/Curation">008749</idno>
<idno type="wicri:Area/Main/Exploration">008749</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title level="a" type="main" xml:lang="en">Robust vision‐based underwater homing using self‐similar landmarks</title>
<author>
<name sortKey="Negre, Amaury" sort="Negre, Amaury" uniqKey="Negre A" first="Amaury" last="Negre">Amaury Negre</name>
<affiliation wicri:level="1">
<country wicri:rule="url">France</country>
</affiliation>
<affiliation wicri:level="1">
<country xml:lang="fr">France</country>
<wicri:regionArea>INRIA Rhone Alpes, eMotion, Av. de l'Europe, Montbonnot</wicri:regionArea>
<wicri:noRegion>Montbonnot</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">France</country>
</affiliation>
</author>
<author>
<name sortKey="Pradalier, Cedric" sort="Pradalier, Cedric" uniqKey="Pradalier C" first="Cedric" last="Pradalier">Cedric Pradalier</name>
<affiliation wicri:level="1">
<country wicri:rule="url">Australie</country>
</affiliation>
<affiliation wicri:level="1">
<country xml:lang="fr">Australie</country>
<wicri:regionArea>Autonomous Systems Laboratory, CSIRO ICT Centre, P.O. Box 883, Kenmore, QLD 4069</wicri:regionArea>
<wicri:noRegion>QLD 4069</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">Australie</country>
</affiliation>
</author>
<author>
<name sortKey="Dunbabin, Matthew" sort="Dunbabin, Matthew" uniqKey="Dunbabin M" first="Matthew" last="Dunbabin">Matthew Dunbabin</name>
<affiliation wicri:level="1">
<country wicri:rule="url">Australie</country>
</affiliation>
<affiliation wicri:level="1">
<country xml:lang="fr">Australie</country>
<wicri:regionArea>Autonomous Systems Laboratory, CSIRO ICT Centre, P.O. Box 883, Kenmore, QLD 4069</wicri:regionArea>
<wicri:noRegion>QLD 4069</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">Australie</country>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series>
<title level="j" type="main">Journal of Field Robotics</title>
<title level="j" type="sub">Special Issue on Field and Service Robotics</title>
<title level="j" type="alt">JOURNAL OF FIELD ROBOTICS</title>
<idno type="ISSN">1556-4959</idno>
<idno type="eISSN">1556-4967</idno>
<imprint>
<biblScope unit="vol">25</biblScope>
<biblScope unit="issue">6‐7</biblScope>
<biblScope unit="page" from="360">360</biblScope>
<biblScope unit="page" to="377">377</biblScope>
<biblScope unit="page-count">18</biblScope>
<publisher>Wiley Subscription Services, Inc., A Wiley Company</publisher>
<pubPlace>Hoboken</pubPlace>
<date type="published" when="2008-06">2008-06</date>
</imprint>
<idno type="ISSN">1556-4959</idno>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<idno type="ISSN">1556-4959</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Acoustics</term>
<term>Algorithm</term>
<term>Apparent radius</term>
<term>Autonomous</term>
<term>Autonomous docking</term>
<term>Blur</term>
<term>Briggs</term>
<term>Brightness normalization</term>
<term>Cambridge university press</term>
<term>Camera model</term>
<term>Circular landmark</term>
<term>Circular target</term>
<term>Computation time</term>
<term>Computer vision</term>
<term>Contrast change</term>
<term>Corke</term>
<term>Detection performance</term>
<term>Detection properties</term>
<term>Directional blur</term>
<term>Displacement model</term>
<term>Docking</term>
<term>Dunbabin</term>
<term>Electromagnetic guidance</term>
<term>Equilateral triangle</term>
<term>Experimental results</term>
<term>Field figure</term>
<term>Field robotics</term>
<term>Function decreases</term>
<term>Function exhibits</term>
<term>Function response</term>
<term>Hartley zisserman</term>
<term>Homing</term>
<term>Homing experiment</term>
<term>Horizontal blur</term>
<term>Horizontal direction</term>
<term>Image contrast</term>
<term>Image frame</term>
<term>Image intensity</term>
<term>Image plane</term>
<term>Incorrect solutions</term>
<term>International conference</term>
<term>Landmark</term>
<term>Lighting conditions</term>
<term>Lighting discontinuity</term>
<term>Lighting problem</term>
<term>Limited processing power</term>
<term>Longitudinal motion</term>
<term>Lookup tables</term>
<term>Lter</term>
<term>Many times</term>
<term>Marine environment</term>
<term>Monocular vision</term>
<term>Monocular vision system</term>
<term>Motion blur</term>
<term>Negre</term>
<term>Observation model</term>
<term>Opencv toolkit</term>
<term>Original landmark</term>
<term>Original landmark application</term>
<term>Outdoor experiments</term>
<term>Passive targets</term>
<term>Perspective transformation</term>
<term>Pixel</term>
<term>Poor lighting conditions</term>
<term>Practical limitations</term>
<term>Range estimation</term>
<term>Real size</term>
<term>Real time</term>
<term>Relative target position</term>
<term>Robotics</term>
<term>Robust</term>
<term>Robust target</term>
<term>Standoff</term>
<term>Standoff distance</term>
<term>Stereo vision system</term>
<term>Target</term>
<term>Target algorithm</term>
<term>Target detection</term>
<term>Target localization</term>
<term>Target location</term>
<term>Target moves</term>
<term>Target position</term>
<term>Target system</term>
<term>Test tank</term>
<term>Underwater docking</term>
<term>Underwater images</term>
<term>Underwater targets</term>
<term>Various transformations</term>
<term>Vehicle control</term>
<term>Visible landmarks</term>
<term>Vision systems</term>
<term>Wiley periodicals</term>
<term>Yates sorrell</term>
</keywords>
<keywords scheme="Teeft" xml:lang="en">
<term>Acoustics</term>
<term>Algorithm</term>
<term>Apparent radius</term>
<term>Autonomous</term>
<term>Autonomous docking</term>
<term>Blur</term>
<term>Briggs</term>
<term>Brightness normalization</term>
<term>Cambridge university press</term>
<term>Camera model</term>
<term>Circular landmark</term>
<term>Circular target</term>
<term>Computation time</term>
<term>Computer vision</term>
<term>Contrast change</term>
<term>Corke</term>
<term>Detection performance</term>
<term>Detection properties</term>
<term>Directional blur</term>
<term>Displacement model</term>
<term>Docking</term>
<term>Dunbabin</term>
<term>Electromagnetic guidance</term>
<term>Equilateral triangle</term>
<term>Experimental results</term>
<term>Field figure</term>
<term>Field robotics</term>
<term>Function decreases</term>
<term>Function exhibits</term>
<term>Function response</term>
<term>Hartley zisserman</term>
<term>Homing</term>
<term>Homing experiment</term>
<term>Horizontal blur</term>
<term>Horizontal direction</term>
<term>Image contrast</term>
<term>Image frame</term>
<term>Image intensity</term>
<term>Image plane</term>
<term>Incorrect solutions</term>
<term>International conference</term>
<term>Landmark</term>
<term>Lighting conditions</term>
<term>Lighting discontinuity</term>
<term>Lighting problem</term>
<term>Limited processing power</term>
<term>Longitudinal motion</term>
<term>Lookup tables</term>
<term>Lter</term>
<term>Many times</term>
<term>Marine environment</term>
<term>Monocular vision</term>
<term>Monocular vision system</term>
<term>Motion blur</term>
<term>Negre</term>
<term>Observation model</term>
<term>Opencv toolkit</term>
<term>Original landmark</term>
<term>Original landmark application</term>
<term>Outdoor experiments</term>
<term>Passive targets</term>
<term>Perspective transformation</term>
<term>Pixel</term>
<term>Poor lighting conditions</term>
<term>Practical limitations</term>
<term>Range estimation</term>
<term>Real size</term>
<term>Real time</term>
<term>Relative target position</term>
<term>Robotics</term>
<term>Robust</term>
<term>Robust target</term>
<term>Standoff</term>
<term>Standoff distance</term>
<term>Stereo vision system</term>
<term>Target</term>
<term>Target algorithm</term>
<term>Target detection</term>
<term>Target localization</term>
<term>Target location</term>
<term>Target moves</term>
<term>Target position</term>
<term>Target system</term>
<term>Test tank</term>
<term>Underwater docking</term>
<term>Underwater images</term>
<term>Underwater targets</term>
<term>Various transformations</term>
<term>Vehicle control</term>
<term>Visible landmarks</term>
<term>Vision systems</term>
<term>Wiley periodicals</term>
<term>Yates sorrell</term>
</keywords>
<keywords scheme="Wicri" type="topic" xml:lang="fr">
<term>Acoustique</term>
<term>Conférence internationale</term>
<term>Milieu marin</term>
<term>Robotique</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Next‐generation autonomous underwater vehicles (AUVs) will be required to robustly identify underwater targets for tasks such as inspection, localization, and docking. Given their often unstructured operating environments, vision offers enormous potential in underwater navigation over more traditional methods; however, reliable target segmentation often plagues these systems. This paper addresses robust vision‐based target recognition by presenting a novel scale and rotationally invariant target design and recognition routine based on self‐similar landmarks that enables robust target pose estimation with respect to a single camera. These algorithms are applied to an AUV with controllers developed for vision‐based docking with the target. Experimental results show that the system performs exceptionally on limited processing power and demonstrates how the combined vision and controller system enables robust target identification and docking in a variety of operating conditions. © 2008 Wiley Periodicals, Inc.</div>
</front>
</TEI>
<affiliations>
<list>
<country>
<li>Australie</li>
<li>France</li>
</country>
</list>
<tree>
<country name="France">
<noRegion>
<name sortKey="Negre, Amaury" sort="Negre, Amaury" uniqKey="Negre A" first="Amaury" last="Negre">Amaury Negre</name>
</noRegion>
<name sortKey="Negre, Amaury" sort="Negre, Amaury" uniqKey="Negre A" first="Amaury" last="Negre">Amaury Negre</name>
<name sortKey="Negre, Amaury" sort="Negre, Amaury" uniqKey="Negre A" first="Amaury" last="Negre">Amaury Negre</name>
</country>
<country name="Australie">
<noRegion>
<name sortKey="Pradalier, Cedric" sort="Pradalier, Cedric" uniqKey="Pradalier C" first="Cedric" last="Pradalier">Cedric Pradalier</name>
</noRegion>
<name sortKey="Dunbabin, Matthew" sort="Dunbabin, Matthew" uniqKey="Dunbabin M" first="Matthew" last="Dunbabin">Matthew Dunbabin</name>
<name sortKey="Dunbabin, Matthew" sort="Dunbabin, Matthew" uniqKey="Dunbabin M" first="Matthew" last="Dunbabin">Matthew Dunbabin</name>
<name sortKey="Dunbabin, Matthew" sort="Dunbabin, Matthew" uniqKey="Dunbabin M" first="Matthew" last="Dunbabin">Matthew Dunbabin</name>
<name sortKey="Pradalier, Cedric" sort="Pradalier, Cedric" uniqKey="Pradalier C" first="Cedric" last="Pradalier">Cedric Pradalier</name>
<name sortKey="Pradalier, Cedric" sort="Pradalier, Cedric" uniqKey="Pradalier C" first="Cedric" last="Pradalier">Cedric Pradalier</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Asie/explor/AustralieFrV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 008749 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 008749 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Asie
   |area=    AustralieFrV1
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     ISTEX:4B3736AF066B055F4CCDE69002FF8EC8EFAE3978
   |texte=   Robust vision‐based underwater homing using self‐similar landmarks
}}

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Tue Dec 5 10:43:12 2017. Site generation: Tue Mar 5 14:07:20 2024